Relating Multimodal Imagery Data in 3D
Authors
Abstract
This research develops and improves the fundamental mathematical approaches and techniques required to relate imagery and imagery-derived multimodal products in 3D. Image registration, in a 2D sense, will always be limited by the 3D effects of viewing geometry on the target. However, effects such as occlusion, parallax, shadowing, and terrain/building elevation can often be mitigated with even a modest amount of 3D target modeling. Additionally, the imaged scene may appear radically different depending on the sensed modality of interest; this is evident from the differences in visible, infrared, polarimetric, and radar imagery of the same site. This thesis develops a 'model-centric' approach to relating multimodal imagery in a 3D environment. By correctly modeling a site of interest, both geometrically and physically, it is possible to remove or mitigate some of the most difficult challenges associated with multimodal image registration. To accomplish this, the mathematical framework necessary to relate imagery to geometric models is thoroughly examined. Since geometric models may need to be generated to apply this 'model-centric' approach, this research develops methods to derive 3D models from imagery and LIDAR data. Of critical note is the implementation of complementary techniques for relating multimodal imagery that use the geometric model in concert with physics-based modeling to simulate scene appearance under diverse imaging scenarios. Finally, the often neglected final phase of mapping localized image registration results back to the world-coordinate-system model for final data archival is addressed. In short, once a target site is properly modeled, both geometrically and physically, it is possible to orient the 3D model to the same viewing perspective as a captured image to enable proper registration.
If done accurately, the synthetic model's physical appearance can simulate the imaged modality of interest while simultaneously removing the 3D ambiguity between the model and the captured image. Once registered, the captured image can then be archived as a texture map on the geometric site model. In this way, the 3D information that was lost when the image was acquired can be regained and properly related with other datasets for data fusion and analysis.
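The core geometric step described above, orienting a 3D site model to a captured image's viewing perspective, reduces to projecting the model's vertices through the camera that acquired the image. A minimal sketch of that projection, assuming a standard pinhole camera model with hypothetical intrinsics and pose (the thesis's actual sensor models are not specified here):

```python
import numpy as np

def project_points(points_w, K, R, t):
    """Project 3D world points into pixel coordinates with a pinhole camera.

    points_w : (N, 3) array of world coordinates
    K        : (3, 3) camera intrinsic matrix
    R, t     : world-to-camera rotation (3, 3) and translation (3,)
    Returns an (N, 2) array of pixel coordinates.
    """
    p_cam = points_w @ R.T + t           # world frame -> camera frame
    p_img = p_cam @ K.T                  # apply intrinsics (homogeneous)
    return p_img[:, :2] / p_img[:, 2:3]  # perspective divide

# Hypothetical camera: 1000 px focal length, principal point (320, 240),
# located at the world origin looking down the +Z axis.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

verts = np.array([[0.0, 0.0, 10.0],    # model vertex on the optical axis
                  [1.0, 0.0, 10.0]])   # vertex 1 m right, 10 m deep
px = project_points(verts, K, R, t)
# The on-axis vertex lands at the principal point (320, 240); the offset
# vertex shifts by f * X/Z = 1000 * 1/10 = 100 px in x, i.e. (420, 240).
```

Rendering the model from this recovered pose yields a synthetic view that can be registered to the captured image; inverting the same projection maps registration results back onto the model as a texture, which is the archival step the abstract describes.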
Similar resources
A Multimodal Virtual Reality Interface for VTK
The object oriented Visualization Toolkit (VTK) is widely used for scientific visualization. VTK is a visualization library that provides functions for presenting 3D data. Interaction with the visualized data is done by mouse and keyboard. Support for three-dimensional and multimodal input is non-existent. This paper describes VR-VTK: a multimodal interface to VTK on a desktop virtual environme...
Multitemporal Archaeological Imagery to Model the Progress of Excavation
The progress of excavation work has been regularly recorded by taking images during the excavation seasons of Finnish Jabal Haroun Project. This multitemporal archaeological imagery is collected during 1998-2003. Images have been taken daily from the archaeological excavation site, namely the monastic complex of St. Aaron located near Petra, in Jordan. The images have been taken with non-metric...
Object-Based Classification of UltraCamD Imagery for Identification of Tree Species in the Mixed Planted Forest
This study is a contribution to assess the high resolution digital aerial imagery for semi-automatic analysis of tree species identification. To maximize the benefit of such data, the object-based classification was conducted in a mixed forest plantation. Two subsets of an UltraCam D image were geometrically corrected using aero-triangulation method. Some appropriate transformations were perfor...
Visualization Techniques for 3D Multimodal Medical Datasets: a Survey by Paul Corneliu Herghelegiu
Accurate medical diagnosis based on images acquired with various medical imaging techniques commonly requires multiple images inspection. These images can be obtained using the same scanning technique but with different scanning parameters. Alternatively, the physicians can use images obtained with different scanning equipments. The visualization of multimodal data in the same rendering scene c...
The FOCAL Point – Multimodal Dialogue with Virtual Geospatial Displays
The Future Operations Centre Analysis Laboratory (FOCAL) at Australia’s Defence Science and Technology Organisation (DSTO) is aimed at exploring new paradigms for situation awareness (SA) and command and control (C2) in military command centres, making use of new technologies developed for simulation, virtual reality, and real-time 3D animation. Recent work includes the development of a multimo...